
    Assessing the Use of SAR/Optical Data Fusion and TensorFlow for Improved Mangrove Mapping

    Mangrove forests are found in intertidal zones of tropical regions around the world and provide important ecological and economic benefits: they act as carbon sinks, habitats for flora and fauna, and natural barriers against hurricanes and tsunamis. Wood from mangrove forests is used as fuel and building material in surrounding coastal communities, supporting local livelihoods. Despite the importance of these ecosystems, mangrove forests have historically been degraded by natural processes, such as severe weather, and by anthropogenic factors, such as conversion to agriculture and aquaculture. This study assesses change in mangrove forests in Nigeria and Mozambique from 2015 to 2018 using SAR and optical data fusion. Because of frequent cloud cover over the study areas, SAR and optical data are fused to obtain gap-free, cloud-free imagery. Landsat-8 OLI and Sentinel-1 imagery is fused with TensorFlow, an open-source platform for developing machine learning models. The resulting images are classified to discriminate mangrove forest cover from other land cover types, and change is estimated using image differencing. Understanding the rates and magnitude of mangrove change across space and time can aid in identifying priority areas for forest regeneration and can help construct sustainable management practices for the future.
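    The classify-then-difference step described in this abstract can be sketched in a few lines. The sketch below is a minimal illustration, not the study's actual TensorFlow fusion/classification pipeline: it assumes two binary mangrove masks (1 = mangrove, 0 = other) have already been produced for the two dates, and that each pixel covers 0.09 ha (a 30 m Landsat-8 pixel).

    ```python
    def mangrove_change(mask_start, mask_end, pixel_area_ha=0.09):
        """Estimate mangrove change by differencing two binary classification
        maps (nested lists of 0/1). A 30 m x 30 m Landsat-8 pixel covers
        0.09 ha -- an assumption used here purely for illustration."""
        loss_px = gain_px = 0
        for row_a, row_b in zip(mask_start, mask_end):
            for a, b in zip(row_a, row_b):
                if a == 1 and b == 0:
                    loss_px += 1   # mangrove at start, gone by end
                elif a == 0 and b == 1:
                    gain_px += 1   # newly classified mangrove
        return {
            "loss_ha": loss_px * pixel_area_ha,
            "gain_ha": gain_px * pixel_area_ha,
            "net_ha": (gain_px - loss_px) * pixel_area_ha,
        }

    # Toy 3x3 masks standing in for classified 2015 and 2018 scenes.
    m2015 = [[1, 1, 0], [1, 1, 0], [0, 0, 0]]
    m2018 = [[1, 0, 0], [1, 0, 1], [0, 0, 0]]
    result = mangrove_change(m2015, m2018)
    ```

    On these toy masks, two pixels are lost and one is gained, giving a small net loss; the same per-pixel bookkeeping scales to full scenes.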

    Assessing Changes in Mangrove Forests in Africa: Quantifying Loss and Identifying Drivers of Change using Landsat-8 OLI

    The objective of this project is to quantify changes in mangrove extent in Madagascar and Nigeria from 2015 to 2018. Both countries contain a significant portion of the world's mangroves, which are known to be deforested and degraded by natural and anthropogenic factors. Change is estimated using multi-date Landsat-8 OLI data and cloud-computing techniques. Findings show that mangroves in both countries exhibited areal loss during the study period, but that loss varies across space. Understanding the rate and magnitude of mangrove change can aid in identifying priority areas for forest regeneration and can help construct sustainable management practices for the future.
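    Areal loss over a multi-year period is often reported as an annualised rate of change. The abstract does not state which metric this project used, so the formula below, r = (1/t) ln(A_end / A_start), is an illustrative assumption (a formula commonly used in deforestation studies), with hypothetical areas:

    ```python
    import math

    def annual_change_rate(area_start_ha, area_end_ha, years):
        """Annualised rate of change, r = (1/t) * ln(A_end / A_start).
        Negative values indicate loss; multiply by 100 for percent/year."""
        return math.log(area_end_ha / area_start_ha) / years

    # Hypothetical figures: 10,000 ha of mangrove in 2015 shrinking
    # to 9,400 ha by 2018 (a 3-year study period).
    r = annual_change_rate(10_000, 9_400, 3)
    ```

    Here r is roughly -0.021, i.e. about a 2.1% loss per year, which makes rates comparable across countries with different starting extents.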

    A new simple six-step model to promote recruitment to RCTs was developed and successfully implemented

    How a randomised controlled trial (RCT) is explained to patients is a key determinant of recruitment to that trial. This study developed and implemented a simple six-step model to fully inform patients and to support them in deciding whether to take part.

    Ninety-two consultations with 60 new patients were recorded and analysed during a pilot RCT comparing surgical and non-surgical interventions for hip impingement. Recordings were analysed using techniques of thematic analysis and focused conversation analysis.

    Early findings supported the development of a simple six-step model to provide a framework for good recruitment practice. The model's steps are: 1) Explain the condition; 2) Reassure patients about receiving treatment; 3) Establish uncertainty; 4) Explain the study purpose; 5) Give a balanced view of treatments; and 6) Explain study procedures. Two elements run throughout the consultation: i) responding to patients' concerns and ii) showing confidence. The pilot study was successful, with 70% (n = 60) of patients approached across 9 centres agreeing to take part in the RCT, so the full-scale trial was funded.

    The six-step model provides a promising framework for successful recruitment to RCTs. Further testing of the model is now required.

    From Sparse to Dense: GPT-4 Summarization with Chain of Density Prompting

    Selecting the ``right'' amount of information to include in a summary is a difficult task. A good summary should be detailed and entity-centric without being overly dense and hard to follow. To better understand this tradeoff, we solicit increasingly dense GPT-4 summaries with what we refer to as a ``Chain of Density'' (CoD) prompt. Specifically, GPT-4 generates an initial entity-sparse summary before iteratively incorporating missing salient entities without increasing the length. Summaries generated by CoD are more abstractive, exhibit more fusion, and have less of a lead bias than GPT-4 summaries generated by a vanilla prompt. We conduct a human preference study on 100 CNN DailyMail articles and find that humans prefer GPT-4 summaries that are more dense than those generated by a vanilla prompt and almost as dense as human-written summaries. Qualitative analysis supports the notion that there exists a tradeoff between informativeness and readability. 500 annotated CoD summaries, as well as an extra 5,000 unannotated summaries, are freely available on HuggingFace (https://huggingface.co/datasets/griffin/chain_of_density).
    Comment: preprint
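    The sparse-to-dense loop the abstract describes can be sketched as follows. This is a schematic only: `call_llm` is a hypothetical stand-in for a GPT-4 API call, and the prompt wording paraphrases the idea rather than reproducing the paper's actual CoD prompt.

    ```python
    def chain_of_density(article, call_llm, steps=5):
        """Sketch of the Chain-of-Density loop: request an initial
        entity-sparse summary, then repeatedly ask the model to fold in
        missing salient entities WITHOUT increasing the summary length.
        Returns one summary per densification step."""
        summary = call_llm(
            f"Article:\n{article}\n\n"
            "Write a short, entity-sparse summary (about 80 words)."
        )
        summaries = [summary]
        for _ in range(steps - 1):
            summary = call_llm(
                f"Article:\n{article}\n\nCurrent summary:\n{summary}\n\n"
                "Identify 1-3 salient entities from the article that are "
                "missing from the summary, then rewrite the summary to "
                "include them without increasing its length."
            )
            summaries.append(summary)
        return summaries

    # Usage with a trivial mock standing in for a real model call:
    mock = lambda prompt: f"summary ({len(prompt)} chars of prompt seen)"
    outs = chain_of_density("Some article text.", mock, steps=3)
    ```

    Keeping the length fixed while adding entities is what forces each successive summary to become denser rather than merely longer.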

    Generating EDU Extracts for Plan-Guided Summary Re-Ranking

    Two-step approaches, in which summary candidates are generated then re-ranked to return a single summary, can improve ROUGE scores over the standard single-step approach. Yet standard decoding methods (i.e., beam search, nucleus sampling, and diverse beam search) produce candidates with redundant, and often low-quality, content. In this paper, we design a novel method to generate candidates for re-ranking that addresses these issues. We ground each candidate abstract on its own unique content plan and generate distinct plan-guided abstracts using a model's top beam. More concretely, a standard language model (a BART LM) auto-regressively generates elemental discourse unit (EDU) content plans with an extractive copy mechanism. The top K beams from the content plan generator are then used to guide a separate LM, which produces a single abstractive candidate for each distinct plan. We apply an existing re-ranker (BRIO) to abstractive candidates generated from our method, as well as to baseline decoding methods. We show large relevance improvements over previously published methods on widely used single-document news article corpora, with ROUGE-2 F1 gains of 0.88, 2.01, and 0.38 on CNN / DailyMail, NYT, and XSum, respectively. A human evaluation on CNN / DM validates these results. Similarly, on 1k samples from CNN / DM, we show that prompting GPT-3 to follow EDU plans outperforms sampling-based methods by 1.05 ROUGE-2 F1 points. Code to generate and realize plans is available at https://github.com/griff4692/edu-sum.
    Comment: ACL 202
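    The generate-then-rerank structure in this abstract reduces to a small control flow. In the sketch below, `generate` and `score` are hypothetical stand-ins for the plan-conditioned LM and the BRIO-style re-ranker; real models are replaced by toy callables so only the pipeline shape is shown.

    ```python
    def plan_guided_rerank(plans, generate, score):
        """Schematic two-step pipeline: each of the top-K EDU content plans
        guides exactly one abstractive candidate, and an external re-ranker
        selects the single best candidate. Distinct plans yield distinct
        candidates, avoiding the redundancy of vanilla beam search."""
        candidates = [generate(plan) for plan in plans]  # one candidate per plan
        best = max(candidates, key=score)                # re-rank, keep the top one
        return best, candidates

    # Toy stand-ins: "generation" echoes the plan, "scoring" prefers longer text.
    plans = [["EDU-1"], ["EDU-1", "EDU-2"], ["EDU-3"]]
    gen = lambda plan: " ".join(plan)
    best, cands = plan_guided_rerank(plans, gen, score=len)
    ```

    Grounding each candidate in its own plan is what makes the candidate pool diverse before re-ranking ever happens; the re-ranker then only has to choose among genuinely different summaries.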